We study the learning dynamics of self-predictive learning for reinforcement learning, a family of algorithms that learn representations by minimizing the prediction error of their own future latent representations. Despite its recent empirical success, such algorithms have an apparent defect: trivial representations (such as constants) minimize the prediction error, yet it is obviously undesirable to converge to such solutions. Our central insight is that careful design of the optimization dynamics is critical to learning meaningful representations. We identify that a faster-paced optimization of the predictor and semi-gradient updates on the representation are crucial to preventing representation collapse. Then, in an idealized setup, we show that the self-predictive learning dynamics carry out spectral decomposition on the state transition matrix, effectively capturing information about the transition dynamics. Building on these theoretical insights, we propose bidirectional self-predictive learning, a novel self-predictive algorithm that learns two representations simultaneously. We examine the robustness of our theoretical insights with a number of small-scale experiments and showcase the promise of the novel representation learning algorithm with large-scale experiments.
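To illustrate the two design choices the abstract highlights, here is a minimal sketch (toy networks and learning rates are my own assumptions, not the paper's architecture): latent self-prediction with a stop-gradient on the target representation (the semi-gradient update) and a larger learning rate on the predictor (the faster-paced optimization).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative dimensions, not from the paper.
obs_dim, latent_dim = 32, 8
phi = nn.Linear(obs_dim, latent_dim)           # representation network
predictor = nn.Linear(latent_dim, latent_dim)  # latent transition predictor

# Faster-paced optimization of the predictor: a larger learning rate than
# the representation (one simple way to realize a two-timescale update).
opt = torch.optim.SGD([
    {"params": phi.parameters(), "lr": 1e-3},
    {"params": predictor.parameters(), "lr": 1e-2},
])

def self_predictive_loss(obs, next_obs):
    z = phi(obs)
    # Semi-gradient: no gradient flows through the target representation,
    # which helps prevent collapse to trivial (constant) solutions.
    target = phi(next_obs).detach()
    return F.mse_loss(predictor(z), target)

obs, next_obs = torch.randn(64, obs_dim), torch.randn(64, obs_dim)
opt.zero_grad()
self_predictive_loss(obs, next_obs).backward()
opt.step()
```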
We introduce a method for policy improvement that interpolates between the greedy approach of value-based reinforcement learning (RL) and the full planning approach typical of model-based RL. The new method builds on the concept of a geometric horizon model (GHM, also known as a gamma-model), which models the discounted state-visitation distribution of a given policy. We show that we can evaluate any non-Markov policy that switches between a set of base Markov policies with fixed probability, via a careful composition of the base policies' GHMs and without any additional learning. We can then apply generalised policy improvement (GPI) to this collection of non-Markov policies to obtain a new Markov policy that will in general outperform its precursors. We provide a thorough theoretical analysis of this approach, develop applications to transfer and standard RL, and empirically demonstrate its effectiveness over standard GPI on challenging deep RL continuous control tasks. We also provide an analysis of GHM training methods, proving a novel convergence result for previously proposed methods and showing how to train these models stably in deep RL settings.
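To make the GHM-to-value connection concrete, here is a small tabular sketch (my own illustrative setup, not the paper's composition algorithm: the reward r, discount gamma, one-step lookahead model P, and the random stand-in GHMs are all assumptions). A GHM for policy pi_i gives the discounted state-visitation distribution M_i(s'|s), from which V^{pi_i}(s) = E_{s'~M_i(.|s)}[r(s')] / (1 - gamma); GPI then acts greedily with respect to the best base policy at each state.

```python
import numpy as np

S, A, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)
r = rng.random(S)                                          # reward per state
P = rng.random((A, S, S)); P /= P.sum(-1, keepdims=True)   # one-step model

# GHMs for two base policies: rows are discounted state-visitation
# distributions M[i, s, s'] (random stand-ins for learned models here).
M = rng.random((2, S, S)); M /= M.sum(-1, keepdims=True)

# Value of each base policy from its GHM, no extra learning needed:
V = M @ r / (1.0 - gamma)                                  # shape (2, S)

# GPI via a one-step lookahead: Q_i(s,a) = E_{s'~P}[r(s') + gamma * V_i(s')],
# then act greedily w.r.t. the max over base policies.
target = r + gamma * V                                     # shape (2, S)
Q = np.einsum("ask,ik->ias", P, target)                    # Q[i, a, s]
gpi_policy = Q.max(axis=0).argmax(axis=0)                  # one action per state
print(gpi_policy)
```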
We present BYOL-Explore, a conceptually simple yet general approach for curiosity-driven exploration in visually complex environments. BYOL-Explore learns a world representation, the world dynamics, and an exploration policy all together by optimizing a single prediction loss in the latent space, with no additional auxiliary objective. We show that BYOL-Explore is effective in DM-HARD-8, a challenging partially observable, continuous-action, hard-exploration benchmark with visually rich 3-D environments. On this benchmark, we solve the majority of the tasks purely by augmenting the extrinsic reward with BYOL-Explore's intrinsic reward, whereas prior work could only get off the ground with human demonstrations. As further evidence of the generality of BYOL-Explore, we show that it achieves superhuman performance on the ten hardest exploration games in Atari while having a much simpler design than other competitive agents.
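A minimal sketch of the core mechanism, under my own simplifying assumptions (one-step prediction and toy linear networks; BYOL-Explore itself uses multi-step open-loop prediction and an EMA target network): the world model's latent prediction error both trains the model and, detached, serves as the intrinsic reward added to the extrinsic reward.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, act_dim, latent_dim = 16, 4, 8          # illustrative sizes
online_enc = nn.Linear(obs_dim, latent_dim)
target_enc = nn.Linear(obs_dim, latent_dim)      # an EMA copy in BYOL-style methods
dynamics = nn.Linear(latent_dim + act_dim, latent_dim)

def prediction_loss_and_intrinsic_reward(obs, action, next_obs):
    pred = dynamics(torch.cat([online_enc(obs), action], dim=-1))
    target = target_enc(next_obs).detach()
    # Per-transition prediction error: the single loss that trains the
    # world model, and (detached) the curiosity signal for the policy.
    err = F.mse_loss(pred, target, reduction="none").mean(-1)
    return err.mean(), err.detach()

obs = torch.randn(5, obs_dim)
action = F.one_hot(torch.randint(act_dim, (5,)), act_dim).float()
next_obs = torch.randn(5, obs_dim)
loss, r_int = prediction_loss_and_intrinsic_reward(obs, action, next_obs)

beta = 0.1                        # intrinsic reward scale (a hyperparameter)
r_ext = torch.zeros(5)            # extrinsic reward from the environment
r_total = r_ext + beta * r_int    # reward actually given to the RL agent
```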
Natural behavior consists of dynamics that are unpredictable, can switch suddenly, and unfold over many different timescales. While some success has been found in building representations of behavior under constrained or simplified task-based conditions, many of these models cannot be applied to free and naturalistic settings because they assume a single timescale of dynamics. In this work, we introduce Bootstrap Across Multiple Scales (BAMS), a multi-scale representation learning model: we combine a pooling module that aggregates features extracted by encoders with different temporal receptive fields, and design a set of latent objectives to bootstrap the representations in each respective space, encouraging disentanglement across different timescales. We first apply our method to a dataset of quadrupeds navigating different terrain types, and show that our model captures the temporal complexity of behavior. We then apply our method to the MABe 2022 multi-agent behavior challenge, where our model ranks third overall and first on two subtasks, demonstrating the importance of incorporating multiple timescales when analyzing behavior.
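A toy sketch of the multi-scale idea (the sizes, the choice of dilated 1-D convolutions, and the placeholder bootstrapping targets are my assumptions for illustration; BAMS' actual encoders and latent objectives are specified in the paper): two encoders with short and long temporal receptive fields, a pooling module that aggregates their features, and one latent objective per scale.

```python
import torch
import torch.nn as nn

feat_dim, hid = 12, 16        # illustrative sizes

# Two encoders over time with different temporal receptive fields:
# a short-range one and a long-range (dilated) one.
short_enc = nn.Conv1d(feat_dim, hid, kernel_size=3, padding=1)
long_enc = nn.Conv1d(feat_dim, hid, kernel_size=3, padding=8, dilation=8)

class Pooling(nn.Module):
    # Aggregates the per-scale features into one behavior embedding.
    def forward(self, short_f, long_f):
        return torch.cat([short_f, long_f], dim=1)

x = torch.randn(2, feat_dim, 100)        # (batch, features, time)
short_f, long_f = short_enc(x), long_enc(x)
z = Pooling()(short_f, long_f)           # multi-scale embedding, (2, 2*hid, 100)

# One bootstrapping objective per scale keeps the timescales disentangled:
# each latent space gets its own target rather than a single shared one.
# (Time-shifted self-targets below are placeholders for illustration.)
losses = [((f - f.roll(shifts=5, dims=2).detach()) ** 2).mean()
          for f in (short_f, long_f)]
```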
Modern classification algorithms are susceptible to adversarial examples: perturbations of inputs that cause the algorithm to produce undesirable behavior. In this work, we seek to understand and extend adversarial examples across domains in which inputs are discrete, particularly new domains such as computational biology. As a step towards this goal, we formalize a notion of synonymous adversarial examples that applies in any discrete setting, and describe a simple domain-agnostic algorithm to construct such examples. We apply this algorithm across multiple domains, including sentiment analysis and DNA sequence classification, and find that it consistently uncovers adversarial examples. We seek to understand their prevalence theoretically, and attribute their existence to spurious token correlations, a statistical phenomenon specific to discrete spaces. Our work is a step towards a domain-agnostic treatment of discrete adversarial examples analogous to that of continuous inputs.
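A hedged sketch of what a simple domain-agnostic attack of this kind can look like (greedy single-token substitution is my illustrative choice; the paper's exact algorithm may differ): swap each token for a "synonym" from a domain-specific equivalence set whenever the swap lowers the classifier's confidence in the true label.

```python
from typing import Callable, Dict, List

def synonym_attack(
    tokens: List[str],
    synonyms: Dict[str, List[str]],                 # domain-specific equivalence sets
    true_label_prob: Callable[[List[str]], float],  # classifier confidence
) -> List[str]:
    """Greedy, domain-agnostic search for a synonymous adversarial example."""
    adv = list(tokens)
    for i, tok in enumerate(adv):
        best_prob = true_label_prob(adv)
        for candidate in synonyms.get(tok, []):
            trial = adv[:i] + [candidate] + adv[i + 1:]
            p = true_label_prob(trial)
            if p < best_prob:                       # keep swaps that hurt the model
                adv[i], best_prob = candidate, p
    return adv

# The same routine works for text (word synonyms) or DNA (e.g., synonymous
# codons): only the tokenization and the synonym sets are domain-specific.
```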
Self-supervised learning provides a promising path towards eliminating the need for costly label information in representation learning on graphs. However, to achieve state-of-the-art performance, methods typically require large numbers of negative examples and rely on complex augmentations. This can be prohibitively expensive, especially for large graphs. To address these challenges, we introduce Bootstrapped Graph Latents (BGRL), a graph representation learning method that learns by predicting alternative augmentations of the input. BGRL uses only simple augmentations and alleviates the need for contrasting with negative examples, and is thus scalable by design. BGRL outperforms or matches several established prior methods while reducing memory costs by 2-10x. Furthermore, we show that BGRL can be scaled up to extremely large graphs with hundreds of millions of nodes in the semi-supervised regime, achieving state-of-the-art performance and improving over supervised baselines where representations are shaped only through label information. In particular, our solution centered on BGRL constituted one of the winning entries to the Open Graph Benchmark Large-Scale Challenge at KDD Cup 2021, on a graph orders of magnitude larger than all previously available benchmarks, showcasing the scalability and effectiveness of our approach.
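A compact sketch of the bootstrapped objective (a one-layer GCN-style encoder and feature-dropout augmentation are my stand-ins; BGRL's full recipe also includes edge dropping and EMA target updates): an online encoder plus predictor is trained to match a target encoder's embedding of a different augmentation, with no negative examples anywhere in the loss.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

n, d, h = 50, 16, 8                      # nodes, features, embedding (toy sizes)
x = torch.randn(n, d)
adj = torch.eye(n)                       # stand-in normalized adjacency

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(d, h)
    def forward(self, adj, x):           # one GCN-style propagation step
        return adj @ self.lin(x)

online, predictor = Encoder(), nn.Linear(h, h)
target = copy.deepcopy(online)           # updated by EMA in BGRL, frozen here
for p in target.parameters():
    p.requires_grad_(False)

def augment(x):                          # simple feature-dropout augmentation
    return F.dropout(x, p=0.2)

z1 = predictor(online(adj, augment(x)))  # online view
z2 = target(adj, augment(x))             # target view, no gradient
# Negative-free objective: cosine similarity between the two views.
loss = -F.cosine_similarity(z1, z2.detach(), dim=-1).mean()
loss.backward()
# Target update (sketch): p_target = tau * p_target + (1 - tau) * p_online
```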
Credit assignment in reinforcement learning is the problem of measuring an action's influence on future rewards. In particular, this requires separating skill from luck, i.e., disentangling the effect of an action on rewards from that of external factors and subsequent actions. To achieve this, we adapt the notion of counterfactuals from causality theory to a model-free RL setup. The key idea is to condition value functions on future events, by learning to extract relevant information from a trajectory. We formulate a family of policy gradient algorithms that use these future-conditional value functions as baselines or critics, and show that they are provably low-variance. To avoid the potential bias from conditioning on future information, we constrain the hindsight information to not contain information about the agent's actions. We demonstrate the efficacy and validity of our algorithms on a number of illustrative and challenging problems.
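A sketch of the policy-gradient machinery described above, with illustrative stand-ins (the backward-RNN hindsight extractor and all sizes are my assumptions; the constraint that hindsight features carry no information about the agent's actions is enforced in the paper by an explicit objective, omitted here): the baseline is conditioned on features extracted from the future of the trajectory, and the resulting advantage multiplies the score function.

```python
import torch
import torch.nn as nn

state_dim, hindsight_dim, n_actions = 8, 4, 3   # toy sizes
policy = nn.Linear(state_dim, n_actions)
phi = nn.GRU(state_dim, hindsight_dim, batch_first=True)  # hindsight extractor
baseline = nn.Linear(state_dim + hindsight_dim, 1)

def future_conditional_pg_loss(states, actions, returns):
    # Run the extractor backwards in time so that position t summarizes the
    # *future* of the trajectory from t onwards.
    future_feats, _ = phi(states.flip(1))
    future_feats = future_feats.flip(1)
    b = baseline(torch.cat([states, future_feats], -1)).squeeze(-1)
    advantage = returns - b
    logp = torch.log_softmax(policy(states), -1)
    chosen = logp.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    # Policy gradient with the future-conditional baseline (lower variance),
    # plus a regression term that trains the baseline and extractor.
    return -(advantage.detach() * chosen).mean() + advantage.pow(2).mean()

T = 10
states = torch.randn(2, T, state_dim)
actions = torch.randint(n_actions, (2, T))
returns = torch.randn(2, T)
future_conditional_pg_loss(states, actions, returns).backward()
```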
Automated offensive language detection is essential in combating the spread of hate speech, particularly on social media. This paper describes our work on offensive language identification in the low-resource Indic language Marathi. The problem is formulated as a text classification task: identifying a tweet as offensive or non-offensive. We evaluate different monolingual and multilingual BERT models on this classification task, focusing on BERT models pre-trained on social media datasets. We compare the performance of MuRIL, MahaTweetBERT, MahaTweetBERT-Hateful, and MahaBERT on the HASOC 2022 test set. We also explore external data augmentation from the existing Marathi hate speech corpora HASOC 2021 and L3Cube-MahaHate. MahaTweetBERT, a BERT model pre-trained on Marathi tweets, when fine-tuned on the combined dataset (HASOC 2021 + HASOC 2022 + MahaHate), outperforms all other models with an F1 score of 98.43 on the HASOC 2022 test set. With this, we also provide a new state-of-the-art result on the HASOC 2022 / MOLD v2 test set.
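For readers who want to reproduce the recipe, a hedged fine-tuning sketch with Hugging Face Transformers (the checkpoint id "l3cube-pune/marathi-tweets-bert" and all hyperparameters are assumptions for illustration; consult the paper and the L3Cube releases for the exact model and settings):

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumed checkpoint id for MahaTweetBERT; verify against the L3Cube release.
name = "l3cube-pune/marathi-tweets-bert"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

def preprocess(batch):  # binary task: offensive vs. non-offensive
    return tokenizer(batch["text"], truncation=True, padding="max_length",
                     max_length=128)

# train_dataset: HASOC 2021 + HASOC 2022 + MahaHate, combined and tokenized.
# trainer = Trainer(model=model,
#                   args=TrainingArguments(output_dir="out", num_train_epochs=3),
#                   train_dataset=train_dataset)
# trainer.train()
```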
We consider semi-supervised binary classification for applications in which data points are naturally grouped (e.g., survey responses grouped by state) and the labeled data is biased (e.g., survey respondents are not representative of the population). The groups overlap in the feature space and consequently the input-output patterns are related across the groups. To model the inherent structure in such data, we assume the partition-projected class-conditional invariance across groups, defined in terms of the group-agnostic feature space. We demonstrate that under this assumption, the group carries additional information about the class, over the group-agnostic features, with provably improved area under the ROC curve. Further assuming invariance of partition-projected class-conditional distributions across both labeled and unlabeled data, we derive a semi-supervised algorithm that explicitly leverages the structure to learn an optimal, group-aware, probability-calibrated classifier, despite the bias in the labeled data. Experiments on synthetic and real data demonstrate the efficacy of our algorithm over suitable baselines and ablative models, spanning standard supervised and semi-supervised learning approaches, with and without incorporating the group directly as a feature.
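One way to see how the group can add information on top of group-agnostic features (an illustrative mechanism consistent with the setup above, not the authors' algorithm): combine a calibrated group-agnostic posterior with group-specific class priors via a Bayes-rule odds update.

```python
def group_aware_posterior(p_agnostic, group, prior_by_group, base_prior=0.5):
    """Adjust a calibrated group-agnostic posterior p(y=1|x) using the class
    prior of the data point's group (a Bayes-rule odds update).

    Illustrative only: the paper derives its classifier from the
    partition-projected class-conditional invariance assumption."""
    odds = p_agnostic / (1.0 - p_agnostic)
    prior = prior_by_group[group]
    odds *= (prior / (1.0 - prior)) / (base_prior / (1.0 - base_prior))
    return odds / (1.0 + odds)

# Example: the same score of 0.6 maps to different probabilities by group.
priors = {"group_a": 0.7, "group_b": 0.3}  # e.g., estimated from unlabeled data
print(group_aware_posterior(0.6, "group_a", priors))  # > 0.6
print(group_aware_posterior(0.6, "group_b", priors))  # < 0.6
```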
Commonsense knowledge-graphs (CKGs) are important resources towards building machines that can 'reason' on text or environmental inputs and make inferences beyond perception. While current CKGs encode world knowledge for a large number of concepts and have been effectively utilized for incorporating commonsense in neural models, they primarily encode declarative or single-condition inferential knowledge and assume all conceptual beliefs to have the same likelihood. Further, these CKGs utilize a limited set of relations shared across concepts and lack a coherent knowledge organization structure resulting in redundancies as well as sparsity across the larger knowledge graph. Consequently, today's CKGs, while useful for a first level of reasoning, do not adequately capture deeper human-level commonsense inferences which can be more nuanced and influenced by multiple contextual or situational factors. Accordingly, in this work, we study how commonsense knowledge can be better represented by -- (i) utilizing a probabilistic logic representation scheme to model composite inferential knowledge and represent conceptual beliefs with varying likelihoods, and (ii) incorporating a hierarchical conceptual ontology to identify salient concept-relevant relations and organize beliefs at different conceptual levels. Our resulting knowledge representation framework can encode a wider variety of world knowledge and represent beliefs flexibly using grounded concepts as well as free-text phrases. As a result, the framework can be utilized as both a traditional free-text knowledge graph and a grounded logic-based inference system more suitable for neuro-symbolic applications. We describe how we extend the PrimeNet knowledge base with our framework through crowd-sourcing and expert-annotation, and demonstrate its application for more interpretable passage-based semantic parsing and question answering.
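To make the representation scheme concrete, a small hypothetical encoding (the field names, ontology, and example beliefs are mine; the paper's actual schema extends the PrimeNet knowledge base): composite, multi-condition inferential beliefs carry a likelihood and are organized under concepts of a hierarchical ontology.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Belief:
    premises: List[str]        # composite (multi-condition) antecedent
    conclusion: str
    likelihood: float          # conceptual beliefs need not be equally likely
    concept: str               # anchor in the hierarchical ontology

ontology = {"restaurant": "commercial_place", "commercial_place": "place"}

beliefs = [
    Belief(premises=["at(x, restaurant)", "ordered(x, food)"],
           conclusion="will_pay(x)", likelihood=0.9, concept="restaurant"),
    Belief(premises=["at(x, restaurant)"],
           conclusion="is_hungry(x)", likelihood=0.6, concept="restaurant"),
]

def beliefs_for(concept: str) -> List[Belief]:
    """Collect beliefs at a concept and all of its ontology ancestors."""
    out = []
    while concept:
        out += [b for b in beliefs if b.concept == concept]
        concept = ontology.get(concept, "")
    return out

print([b.conclusion for b in beliefs_for("restaurant")])
```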